Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks

Authors

  • Itay Safran
  • Ohad Shamir
Abstract

We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L1 norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.


Similar resources

Depth Separation in ReLU Networks for Approximating Smooth Non-Linear Functions

We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L1 norm; and smooth non-li...


Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width wmin(d) so that ReLU nets of width wmin(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0, 1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say ...


Approximating Continuous Functions by ReLU Nets of Minimal Width

This article concerns the expressive power of depth in deep feed-forward neural nets with ReLU activations. Specifically, we answer the following question: for a fixed d ≥ 1, what is the minimal width w so that neural nets with ReLU activations, input dimension d, hidden layer widths at most w, and arbitrary depth can approximate any continuous function of d variables arbitrarily well? It turns...


Identification of Crack Location and Depth in a Structure by GMDH-type Neural Networks and ANFIS

The existence of a crack in a structure leads to local flexibility and changes the stiffness and dynamic behavior of the structure. The dynamic behavior of the cracked structure depends on the depth and the location of the crack. Hence, the changes in the dynamic behavior of the structure due to the crack can be used for identifying the location and depth of the crack. In this study the first th...


The Expressive Power of Neural Networks: A View from the Width

The expressive power of neural networks is important for understanding deep learning. Most existing works consider this problem from the view of the depth of a network. In this paper, we study how width affects the expressiveness of neural networks. Classical results state that depth-bounded (e.g. depth-2) networks with suitable activation functions are universal approximators. We show a univer...



Publication date: 2017